         ANALOGY AND FUNCTIONAL EQUIVALENCE


    I have stated that a simulation model of a symbolic system reproduces
the behavior of that system at some input-output level. The reproduction
is achieved through the operations of an algorithm which represents
an organization of hypothetical symbol-processing mechanisms or procedures
capable of generating the I/O behavior of the processes under
investigation. The algorithm must be an effective procedure, that is,
one which really works in the manner intended by the model-builders. In
the model described herein, the paranoid algorithm generates linguistic
I/O behavior typical of patients whose thought processes are dominated by
the paranoid mode.
Given that the manifest outward I/O behavior of the model is
indistinguishable from the manifest outward I/O behavior of paranoid
patients, does this imply that the hypothetical underlying processes used
by the model are analogous to, or the same as, the underlying processes
used by persons in the paranoid mode? This deep and thorny question
should be approached with caution, and only when we are first armed with
some clear notions about analogy, similarity, indistinguishability and
functional equivalence.
    In comparing two things (objects, systems or processes) one can cite
properties they have in common, properties they do not share, and
properties regarding which it is difficult to tell. No two things are
exactly alike in every detail. If they were identical in respect to all
their properties, they would be copies. If they were identical in every
respect, including their spatio-temporal location, we would say we have
only one thing instead of two. One can assert with some justification
that a given thing is not similar to anything else in the world, or that
it is similar to everything else in the world, depending upon how
properties are cited.
    Similarity relations are used in processes of classification in which
objects are grouped into classes, the classes then representing object-
concepts. The members of a class of object-concepts resemble one another
in sharing certain properties. The resemblance between members of the
class is not exact or total. Members of a class are considered more or
less alike, and there exist degrees of resemblance. A classification may
involve only single properties, while a taxonomy seeks to classify things
according to their structure or organization. Thus a simulation model
contributes to taxonomy: since model X is structurally analogous to its
subject Y, Y is to be viewed as belonging to the same class as X.
   In an analogy a comparison is drawn between two things. `Newton did not
show the cause of the apple falling but he showed a similitude between the
apple and the stars.' (D'Arcy Thompson). Huygens suggested an analogy
between sound waves and light waves in order to understand something less
well understood (light) in terms of something better understood (sound).
To account for species variation Darwin postulated a mechanism of natural
selection. He constructed an analogy from two sources, one from artificial
selection as practiced by domestic breeders of animals and one from
Malthus' theory of a competition for existence in a population increasing
geometrically while its resources increase arithmetically. Bohr's model
of the atom offered an analogy between the solar system and the atom.
These few well-known historical examples make vivid the role of analogies
in theory construction. Such analogies are partial paramorphs (Harre,
1971) in that two systems are compared for parallelisms only in respect
to certain properties, not all properties. Bohr's model of the atom as a
miniature planetary system was not intended to suggest that electrons
possessed color or that planets jumped out of their orbits.
   When human thought is the subject of a simulation model, we draw from
two sources, symbolic computation and psychology, an analogy between two
systems known to be able to process symbols: persons and computers. The
properties compared in the analogy are obviously not physical or
substantive, such as blood and wires, but functional and procedural. We
want to assume that the not-well-understood mechanisms of thought in a
person are similar to the somewhat better understood mechanisms of
symbol-processing which take place in a computer. The analogy is one of
functional or procedural equivalence. If model and human are
indistinguishable at a manifest I/O level, then they can be considered
weakly equivalent. If they are indistinguishable at deeper and deeper I/O
levels, then strong equivalence is achieved (see Fodor, 1968). How
stringent and how deep are the demands for equivalence to be? Must there
be point-to-point correspondences at every level? What is to count as a
point and what are the levels? Procedures can be specified and ostensively
pointed to in an algorithm, but how are we to identify a symbolic process
in a person's head? Does a demonstration of functional equivalence
constitute an explanation of observable behavior?
   In constructing an algorithm one puts together an organization of
collaborating functions. (As mentioned, I use the terms `function',
`procedure' and `mechanism' interchangeably.) A function takes some
symbolic structure as input and yields some other symbolic structure as
output. Two computationally equivalent functions, having the same input
and yielding the same output, can differ `inside' the function at the
instruction level.
   Consider an elementary programming problem which students in symbolic
computation are commonly asked to solve. Given a list L of symbols,
L = (A B C D), as input, construct a function or procedure which will
convert this list to the list RL in which the order of the symbols is
reversed, i.e. RL = (D C B A). Here are some examples of functions which
will carry out the operation of reversal. (They are written in the
high-level programming language MLISP.)
       REVERSE1 (L);
         BEGIN
           NEW RL;
           RETURN FOR NEW I IN L DO
             RL ← I CONS RL;
         END;

       REVERSE2 (L);
         BEGIN
           NEW RL, LEN;
           LEN ← LENGTH (L);
           FOR NEW N ← 1 TO LEN DO
             RL[N] ← L[LEN - N + 1];
           RETURN RL;
         END;

       REVERSE3 (L);
         REVERSE3A (L, NIL);

       REVERSE3A (L, RL);
         IF NULL L THEN RL
         ELSE REVERSE3A (CDR L, CAR L CONS RL);
   Each of these computational functions takes a list of symbols, L, as
input and produces a new list, RL, in which the order of the symbols on
the input list is reversed. It is at this I/O level that the functions
can be said to be equivalent. Looking inside the functions one can see
similarities as well as differences at the level of the individual
instructions. For instance, REVERSE1 steps down the input list L, takes
each symbol found and inserts it at the front of the new list RL. On the
other hand, REVERSE2 counts the length of the input list L using another
function called LENGTH which determines the length of a list. REVERSE2
then uses index expressions on both sides of an assignment operator, ←,
(a) to obtain a position in the list RL, (b) to obtain a symbol in the
list L, and (c) to assign the symbol to that position in the reversed
list RL. Notice that REVERSE1 and REVERSE2 are similar in that they use
FOR loops, while REVERSE3, which calls another function REVERSE3A, does
not. REVERSE3A is different from all the others in that it contains an
IF expression and calls itself recursively.
   Hence similarities and differences can be cited between functions as
long as we are clear about levels and degrees of detail. The
above-described functions are computationally equivalent at the
input-output level since they take the same symbolic structures as input
and produce the same symbolic output.
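   For readers who do not know MLISP, the same three strategies can be
sketched in a present-day language such as Python. The sketch below is an
illustrative paraphrase, not part of the original program; the function
names are chosen merely to mirror the MLISP versions.

       def reverse1(l):
           # Step down the input list, inserting each symbol at the
           # front of the new list (mirrors REVERSE1).
           rl = []
           for i in l:
               rl.insert(0, i)
           return rl

       def reverse2(l):
           # Count the length of the input list, then assign each
           # symbol to its mirror-image position (mirrors REVERSE2,
           # translated from 1-based to 0-based indexing).
           length = len(l)
           rl = [None] * length
           for n in range(1, length + 1):
               rl[n - 1] = l[length - n]
           return rl

       def reverse3(l, rl=None):
           # Recur on the tail of the list, accumulating the reversed
           # result (mirrors REVERSE3 and REVERSE3A).
           if rl is None:
               rl = []
           if not l:
               return rl
           return reverse3(l[1:], [l[0]] + rl)

   Each of the three yields ['D', 'C', 'B', 'A'] when given
['A', 'B', 'C', 'D'], again equivalent at the input-output level while
differing at the instruction level.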
  If we propose that an algorithm we have constructed is functionally
equivalent to what goes on in humans when they process symbolic
structures, how can we justify this position? Indistinguishability tests
at, say, the linguistic level provide evidence only for weak equivalence.
We would like to be able to get inside the underlying processes in humans
the way we can with an algorithm by inspecting its instruction code. The
difficulty lies in identifying, making tangible, and counting processes
in human heads. We must have great patience with the neural sciences and
psychology.
   In the meantime, besides weak equivalence and plausibility arguments,
one can appeal to extra-evidential support from other relevant domains.
One can offer analogies between what is known to go on at a molecular
level in living organisms and what goes on in an algorithm. For example,
a DNA molecule in the nucleus of a cell consists of an ordered sequence
(list) of nucleotide bases (symbols) coded in triplets termed codons
(words). Each codon specifies which amino acid is to be linked into the
chain of polypeptides making up the protein during protein synthesis.
The codons function like instructions in a programming language. One
codon is known to operate as a terminal symbol, analogous to the symbols
in an algorithm which mark the end of a list.
If a stop codon appears in the middle of a sequence rather than at its
normal terminal position, as in a point mutation, further protein
synthesis is prevented. The resulting polypeptide chain is abnormal and
may have lethal or trivial consequences for the organism, depending on
what other collaborating processes require to be handed over to them.
The same holds in an algorithm. To use our previous programming example,
the list L consisting of the symbols (A B C D) actually contains the
terminal symbol NIL, which is left unwritten by convention. If in
reversing the list (A B C D NIL) the symbol NIL appeared in the middle
of the list, i.e. (A B NIL C D), then the reversed list RL would contain
only (B A) instead of the expected (D C B A), because the terminal
symbol had been encountered. Such a result may be lethal or trivial to
the algorithm, depending on what other functions require as input from
the reversing function. Each function in an algorithm is embedded in an
organization of collaborating functions, just as is the case in living
organisms.
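   The effect of a misplaced terminal symbol can be made concrete with a
small sketch, again in Python; representing NIL by a sentinel value and
the name reverse_until_nil are assumptions made here for illustration.

       NIL = None  # illustrative stand-in for the terminal symbol

       def reverse_until_nil(l):
           # Step down the list, accumulating the reversed result, and
           # stop as soon as the terminal symbol is encountered.
           rl = []
           for symbol in l:
               if symbol is NIL:
                   break  # terminal symbol found; reversal stops here
               rl.insert(0, symbol)
           return rl

       reverse_until_nil(['A', 'B', NIL, 'C', 'D'])
       # yields ['B', 'A']
       reverse_until_nil(['A', 'B', 'C', 'D', NIL])
       # yields ['D', 'C', 'B', 'A']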
   We know that at the molecular level of living organisms there exist
rules for processes, such as serial progression along a nucleotide
sequence, which are analogous to stepping down a list in an algorithm.
Further analogies can be made between point mutations, in which DNA
codons can be inserted, deleted, substituted or reordered, and symbolic
computation, in which the same operations are commonly carried out. Such
analogies are interesting as extra-evidential support, but obviously
closer linkages are needed between the macro-level of thought processes
and the micro-level of molecular information-processing.
   To obtain evidence for the acceptability of the model, empirical tests
are utilized in evaluation procedures. Such tests should also tell us
which is the best among alternative models. Once we have the `best
available' model, can we be sure it is correct? We can never know with
certainty. Models have a short half-life as approximations and become
superseded by better ones.